Нед. 6. Системы линейных уравнений, метод Гаусса, подпространства, свойства матриц, ранг, ортогональность, обратимость
1. Краткое содержание
1.1 Системы линейных уравнений
1.1.1 Введение
Система линейных уравнений (linear system of equations) — это несколько линейных уравнений с одним и тем же набором неизвестных. Обычно нужно найти значения неизвестных, при которых выполняются все уравнения одновременно.
1.1.2 Матричная запись
Систему удобно записывать в виде \(Ax=b\). Например, для \[ \begin{cases} 2x + 3y + 1z = 8 \\ 4x + 7y + 5z = 20 \\ -2y + 2z = 0 \end{cases} \] имеем:
- \(A\) — матрица коэффициентов (coefficient matrix). \[ A = \begin{bmatrix} 2 & 3 & 1 \\ 4 & 7 & 5 \\ 0 & -2 & 2 \end{bmatrix} \]
- \(x\) — вектор неизвестных (vector of unknowns). \[ x = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \]
- \(b\) — вектор правых частей (vector of constant terms). \[ b = \begin{bmatrix} 8 \\ 20 \\ 0 \end{bmatrix} \]
Расширенная матрица (augmented matrix) \([A|b]\) получается приписыванием столбца \(b\) к \(A\): \[ [A|b] = \left[\begin{array}{ccc|c} 2 & 3 & 1 & 8 \\ 4 & 7 & 5 & 20 \\ 0 & -2 & 2 & 0 \end{array}\right] \]
1.1.3 Типы решений
Возможны случаи:
- Нет решения: система несовместна (inconsistent / incompatible).
- Ровно одно решение: система совместна и имеет единственное решение (unique solution).
- Бесконечно много решений: система совместна, но неопределённа (indeterminate).
1.2 Методы решения линейных систем
1.2.1 Метод Гаусса (Gaussian elimination)
Расширенную матрицу приводят к ступенчатому виду (row echelon form, REF) элементарными преобразованиями строк, затем используют обратный ход (back substitution).
Элементарные преобразования строк (elementary row operations):
- Перестановка двух строк.
- Умножение строки на ненулевой скаляр.
- Прибавление к одной строке другой, умноженной на скаляр.
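Эти шаги можно набросать кодом. Ниже — минимальный набросок на чистом Python (точная арифметика через `fractions`, ведущий элемент выбирается по максимуму модуля, как в примере 4.4); имя `solve_gauss` и структура условные — это иллюстрация, а не эталонная реализация.

```python
from fractions import Fraction

def solve_gauss(A, b):
    """Прямой ход метода Гаусса + обратный ход (back substitution)."""
    n = len(A)
    # расширенная матрица [A|b] в точной арифметике
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for k in range(n):
        # перестановка строк: ведущий элемент — максимальный по модулю
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        assert M[k][k] != 0, "матрица вырождена"
        for i in range(k + 1, n):
            # к строке i прибавляем строку k, умноженную на скаляр -f
            f = M[i][k] / M[k][k]
            M[i] = [a - f * c for a, c in zip(M[i], M[k])]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):  # обратный ход
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# система из п. 1.1.2: ответ x=2, y=1, z=1
print(solve_gauss([[2, 3, 1], [4, 7, 5], [0, -2, 2]], [8, 20, 0]))
# -> [Fraction(2, 1), Fraction(1, 1), Fraction(1, 1)]
```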
1.2.2 Правило Крамера (Cramer’s rule)
Для \(Ax=b\) при \(\det(A)\neq 0\): \[ x_i = \frac{\det(A_i)}{\det(A)} \] где \(A_i\) получается заменой \(i\)-го столбца \(A\) на \(b\).
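Для случая \(2\times 2\) правило легко проверить коротким наброском на Python (имя `cramer_2x2` условное):

```python
from fractions import Fraction

def cramer_2x2(A, b):
    """Правило Крамера для системы 2x2 (требуется det(A) != 0)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    assert det != 0, "det(A) = 0: правило Крамера неприменимо"
    # A_i получается заменой i-го столбца A на b
    d1 = b[0] * A[1][1] - A[0][1] * b[1]
    d2 = A[0][0] * b[1] - b[0] * A[1][0]
    return Fraction(d1, det), Fraction(d2, det)

# система 3x1 + 2x2 = 7, 2x1 - 4x2 = -2 (разбирается в примере 4.12)
print(cramer_2x2([[3, 2], [2, -4]], [7, -2]))  # -> (Fraction(3, 2), Fraction(5, 4))
```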
1.2.3 Метод обратной матрицы
Если \(A\) квадратная и обратима, то \[ x = A^{-1}b \] Обратную матрицу можно искать через присоединённую (adjugate) или метод Гаусса–Жордана (Gauss–Jordan elimination).
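Для \(2\times 2\) обратную матрицу даёт явная формула из раздела 3; набросок на Python (имена `inv_2x2`, `matvec` условные):

```python
from fractions import Fraction

def inv_2x2(A):
    """Обратная 2x2 по формуле A^{-1} = (1/(ad-bc)) [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = Fraction(a * d - b * c)
    assert det != 0, "матрица вырождена, обратной нет"
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[3, 2], [2, -4]]
# x = A^{-1} b для системы из примера 4.12
print(matvec(inv_2x2(A), [7, -2]))  # -> [Fraction(3, 2), Fraction(5, 4)]
```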
1.3 Метод Гаусса: детали
1.3.1 Пивоты и ступенчатые формы
- Пивот (pivot) — первый ненулевой элемент строки в REF; слева и снизу от него — нули.
- REF (row echelon form): ненулевые строки выше нулевых; ведущие элементы смещаются вправо при движении вниз.
- RREF (reduced row echelon form): REF, где каждый пивот равен \(1\) и единственный ненулевой в своём столбце.
1.3.2 Ведущие и свободные неизвестные
После приведения к RREF:
- Ведущие переменные (leading variables) соответствуют столбцам с пивотами.
- Свободные переменные (free variables) — столбцам без пивотов; им можно придавать произвольные значения, что даёт бесконечно много решений.
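Разбиение столбцов на пивотные и свободные можно получить программно. Набросок приведения к RREF на чистом Python (матрица коэффициентов из примера 4.5; столбцы нумеруются с нуля, имена условные):

```python
from fractions import Fraction

def rref(A):
    """Приводит матрицу к RREF; возвращает (матрица, пивотные столбцы)."""
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    pivots, r = [], 0
    for j in range(n):
        # ищем ненулевой элемент в столбце j, начиная со строки r
        p = next((i for i in range(r, m) if M[i][j] != 0), None)
        if p is None:
            continue  # пивота в этом столбце нет -> столбец свободный
        M[r], M[p] = M[p], M[r]
        pv = M[r][j]
        M[r] = [a / pv for a in M[r]]  # нормируем пивот к 1
        for i in range(m):
            if i != r and M[i][j] != 0:  # обнуляем столбец выше и ниже пивота
                f = M[i][j]
                M[i] = [a - f * c for a, c in zip(M[i], M[r])]
        pivots.append(j)
        r += 1
    return M, pivots

R, pivots = rref([[1, 2, 3, 5], [2, 4, 8, 12], [3, 6, 7, 13]])
free = [j for j in range(4) if j not in pivots]
print(pivots, free)  # -> [0, 2] [1, 3]
```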
1.3.3 Однородные системы
Однородная система \(Ax=0\) всегда имеет тривиальное решение \(x=0\). Нетривиальное решение есть тогда и только тогда, когда есть хотя бы одна свободная переменная. Если уравнений меньше, чем неизвестных, нетривиальных решений гарантированно бесконечно много.
1.4 Четыре фундаментальных подпространства
Для матрицы \(A\) размера \(m\times n\) выделяют четыре подпространства.
1.4.1 Столбцовое пространство \(C(A)\) (column space)
- Это все линейные комбинации столбцов \(A\) — ровно те правые части \(b\), для которых \(Ax=b\) разрешима.
- Подпространство в \(\mathbb{R}^m\).
- Базис: пивотные столбцы исходной матрицы \(A\).
- Размерность = ранг (rank) \(r\).
1.4.2 Ядро \(N(A)\) (nullspace)
- Множество решений однородной системы \(Ax=0\).
- Подпространство в \(\mathbb{R}^n\).
- Базис задают специальные решения — по одному на каждую свободную переменную.
- Размерность = дефект / nullity \(= n-r\).
1.4.3 Строчное пространство \(C(A^T)\) (row space)
- Все линейные комбинации строк \(A\); то же, что \(C(A^T)\).
- Подпространство в \(\mathbb{R}^n\).
- Базис: ненулевые строки REF/RREF.
- Размерность тоже \(r\).
1.4.4 Левое ядро \(N(A^T)\) (left nullspace)
- Все \(y\), для которых \(A^T y=0\), эквивалентно \(y^T A=0\).
- Подпространство в \(\mathbb{R}^m\).
- Размерность \(= m-r\).
1.5 Ранг матрицы (rank)
Эквивалентные определения ранга:
- \(\dim C(A)\).
- \(\dim C(A^T)\).
- Число пивотов в REF.
- Число линейно независимых столбцов (или строк).
Для \(m\times n\): \(\mathrm{rank}(A)\le \min(m,n)\).
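Ранг как число пивотов удобно вычислять прямым ходом метода Гаусса; набросок на Python (точная арифметика через `fractions`, имя `rank` условное):

```python
from fractions import Fraction

def rank(A):
    """Ранг = число пивотов после прямого хода метода Гаусса."""
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    r = 0
    for j in range(n):
        # ищем ненулевой элемент в столбце j, начиная со строки r
        p = next((i for i in range(r, m) if M[i][j] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, m):
            f = M[i][j] / M[r][j]
            M[i] = [a - f * c for a, c in zip(M[i], M[r])]
        r += 1
    return r

print(rank([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # -> 2 (пример 4.6)
print(rank([[1, 2], [2, 4]]), rank([[1, 2], [3, 4]]))  # -> 1 2 (пример 4.7)
```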
1.6 Ортогональные матрицы (orthogonal matrices)
Квадратная \(Q\) ортогональна, если её столбцы (и строки) образуют ортонормированный базис: скалярное произведение разных столбцов равно \(0\), столбца с самим собой — \(1\).
Критерий: \(Q^TQ=QQ^T=I\), то есть \(Q^{-1}=Q^T\).
Свойства:
- Сохраняют длины: \(||Qx||=||x||\).
- Сохраняют углы и скалярные произведения: \((Qx)\cdot(Qy)=x\cdot y\).
- \(\det(Q)=\pm 1\).
- Примеры: матрицы поворота и отражения.
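Критерий \(Q^TQ=I\) можно проверить поэлементно; набросок на Python (имя `is_orthogonal` условное, допуск `tol` нужен из-за арифметики с плавающей точкой):

```python
import math

def is_orthogonal(Q, tol=1e-12):
    """Проверяет критерий Q^T Q = I поэлементно."""
    n = len(Q)
    for i in range(n):
        for j in range(n):
            # скалярное произведение i-го и j-го столбцов
            dot = sum(Q[k][i] * Q[k][j] for k in range(n))
            if abs(dot - (1 if i == j else 0)) > tol:
                return False
    return True

t = math.pi / 6
R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]  # поворот на 30°
print(is_orthogonal(R))                   # -> True
print(is_orthogonal([[1, 1], [1, -1]]))   # -> False: столбцы не единичной длины
```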
1.7 Обратимость матрицы (invertibility)
Квадратная \(A\) обратима (invertible / non-singular), если существует \(A^{-1}\) с \(AA^{-1}=A^{-1}A=I\).
1.7.1 Эквивалентные условия обратимости (\(n\times n\))
Следующие утверждения эквивалентны:
- \(A\) обратима.
- \(\det(A)\neq 0\).
- \(\mathrm{rank}(A)=n\) (полный ранг).
- Столбцы \(A\) линейно независимы.
- Строки \(A\) линейно независимы.
- \(N(A)=\{0\}\).
- \(Ax=b\) имеет единственное решение для любого \(b\).
- RREF\((A)=I\).
Прямоугольные матрицы в стандартном смысле не бывают обратимыми.
1.7.2 Ранг и вырожденность
- Невырожденная (non-singular): \(\mathrm{rank}(A)=n\).
- Вырожденная (singular): \(\mathrm{rank}(A)<n\).
1.8 Важные неравенства
- Теорема о ранге и дефекте (rank–nullity): \(\mathrm{rank}(A)+\mathrm{nullity}(A)=n\).
- Дополнительно:
- \(\mathrm{rank}(A+B)\le \mathrm{rank}(A)+\mathrm{rank}(B)\)
- \(\mathrm{rank}(AB)\le \min(\mathrm{rank}(A),\mathrm{rank}(B))\)
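Оценку ранга произведения можно проверить численно на матрицах из примера 4.7; набросок на Python (функция `rank` — прямой ход из п. 1.5, имена условные):

```python
from fractions import Fraction

def rank(A):
    """Число пивотов при прямом ходе метода Гаусса (как в п. 1.5)."""
    M = [[Fraction(x) for x in row] for row in A]
    m, n, r = len(M), len(M[0]), 0
    for j in range(n):
        p = next((i for i in range(r, m) if M[i][j] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, m):
            f = M[i][j] / M[r][j]
            M[i] = [a - f * c for a, c in zip(M[i], M[r])]
        r += 1
    return r

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[1, 2], [2, 4]]  # ранг 1 (пример 4.7)
B = [[1, 2], [3, 4]]  # ранг 2
# rank(AB) <= min(rank(A), rank(B))
print(rank(matmul(A, B)), min(rank(A), rank(B)))  # -> 1 1
```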
2. Определения
- Матрица коэффициентов (coefficient matrix): матрица коэффициентов при неизвестных.
- Расширенная матрица (augmented matrix): \([A|b]\).
- Пивот (pivot): первый ненулевой элемент строки в REF.
- REF: ступенчатый вид строк.
- RREF: REF с пивотами \(1\) и нулями в остальных местах пивотных столбцов.
- \(C(A)\) (column space): линейная оболочка столбцов \(A\).
- \(C(A^T)\) (row space): линейная оболочка строк \(A\).
- \(N(A)\) (nullspace): решения \(Ax=0\).
- Ранг (rank): \(\dim C(A)\), число пивотов.
- Дефект (nullity): \(\dim N(A)\).
- Ортогональная матрица: \(Q^{-1}=Q^T\), столбцы ортонормированы.
- Обратимая матрица (invertible / non-singular): существует \(A^{-1}\), \(\det(A)\neq 0\), полный ранг.
- Вырожденная матрица (singular): нет обратной, \(\det(A)=0\), ранг неполный.
3. Формулы
- Система уравнений: \(Ax = b\)
- Правило Крамера (Cramer’s rule): \[ x_i = \frac{\det(A_i)}{\det(A)} \]
- Решение через обратную матрицу: \(x = A^{-1}b\)
- Обратная матрица \(2\times 2\): для \(A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\), \(A^{-1} = \frac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}\)
- Определитель \(2\times 2\): \(\det(A) = ad-bc\)
- Условие ортогональности: \(Q^T Q = I\) или \(Q^{-1} = Q^T\)
- Сохранение длины ортогональным преобразованием: \(||Qx|| = ||x||\)
- Теорема о ранге и дефекте: \(\text{rank}(A) + \text{nullity}(A) = n\)
- Оценка ранга суммы: \(\text{rank}(A+B) \le \text{rank}(A) + \text{rank}(B)\)
- Оценка ранга произведения: \(\text{rank}(AB) \le \min(\text{rank}(A), \text{rank}(B))\)
- Неравенство Сильвестра (Sylvester): \(\text{rank}(A) + \text{rank}(B) - n \le \text{rank}(AB)\), где \(n\) — число столбцов \(A\) (оно же число строк \(B\)).
- Линейная независимость: \(c_1\vec{v_1} + c_2\vec{v_2} + \dots + c_n\vec{v_n} = \vec{0}\) имеет только тривиальное решение (\(c_1=c_2=\dots=c_n=0\)).
4. Примеры
4.1. Решение системы линейных уравнений (Лаба 5, Задание 1)
Solve the system: \[ \begin{cases} 2x + 3y + 1z = 8 \\ 4x + 7y + 5z = 20 \\ -2y + 2z = 0 \end{cases} \]
Решение:
- Write the Augmented Matrix: Represent the system of equations as an augmented matrix.
- \(\begin{bmatrix} 2 & 3 & 1 & | & 8 \\ 4 & 7 & 5 & | & 20 \\ 0 & -2 & 2 & | & 0 \end{bmatrix}\)
- Use Gaussian Elimination: Create zeros below the first pivot (the top-left ‘2’).
- Perform the operation \(R_2 \to R_2 - 2R_1\):
- \(\begin{bmatrix} 2 & 3 & 1 & | & 8 \\ 0 & 1 & 3 & | & 4 \\ 0 & -2 & 2 & | & 0 \end{bmatrix}\)
- Create a zero below the second pivot (the ‘1’ in the second row). Perform the operation \(R_3 \to R_3 + 2R_2\):
- \(\begin{bmatrix} 2 & 3 & 1 & | & 8 \\ 0 & 1 & 3 & | & 4 \\ 0 & 0 & 8 & | & 8 \end{bmatrix}\)
- Perform Back Substitution: Convert the row-echelon matrix back into equations.
- From \(R_3\): \(8z = 8 \implies z = 1\).
- From \(R_2\): \(y + 3z = 4 \implies y + 3(1) = 4 \implies y = 1\).
- From \(R_1\): \(2x + 3y + z = 8 \implies 2x + 3(1) + 1 = 8 \implies 2x + 4 = 8 \implies 2x = 4 \implies x = 2\).
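The computed solution can be sanity-checked by substituting it back into the original system; a minimal Python sketch (variable names are illustrative):

```python
A = [[2, 3, 1], [4, 7, 5], [0, -2, 2]]
b = [8, 20, 0]
x = [2, 1, 1]  # the solution (x, y, z) found above
# residual of each equation: A·x - b should be all zeros
residuals = [sum(a * xi for a, xi in zip(row, x)) - bi for row, bi in zip(A, b)]
print(residuals)  # -> [0, 0, 0]
```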
4.2. Нетривиальные решения однородной системы (Лаба 5, Задание 2)
Consider a homogeneous linear system \(A\mathbf{x} = \mathbf{0}\) of \(n\) equations for \(n+1\) unknowns. Does it have a non-trivial solution? (\(\mathbf{x} \neq \mathbf{0}\))
Решение:
- Analyze the Matrix A: The system has \(n\) equations and \(n+1\) unknowns, so the coefficient matrix A has dimensions \(n \times (n+1)\).
- Consider the Rank: The rank of a matrix is the number of pivots, and it cannot be greater than the number of rows or the number of columns. In this case, \(\text{rank}(A) \le n\).
- Relate Rank to Free Variables: The number of free variables in a system is equal to the number of columns (unknowns) minus the rank.
- Number of free variables = \((n+1) - \text{rank}(A)\).
- Determine the Number of Free Variables: Since we know \(\text{rank}(A) \le n\), the number of free variables must be at least \((n+1) - n = 1\).
- Conclusion: A homogeneous system has non-trivial solutions if and only if there is at least one free variable. Since this system is guaranteed to have at least one free variable, it must have a non-trivial solution.
4.3. Существование нетривиальных решений (Лаба 5, Задание 3)
Show that the following homogeneous system has nontrivial solutions: \[ \begin{cases} 1x_1 - 1x_2 + 2x_3 - 1x_4 = 0 \\ 2x_1 + 2x_2 + 0x_3 + 1x_4 = 0 \\ 3x_1 + 1x_2 + 2x_3 - 1x_4 = 0 \end{cases} \]
Решение:
- Analyze the System: This is a homogeneous system with 3 equations and 4 unknowns.
- Apply the Rank Theorem: The number of variables (4) is greater than the number of equations (3). This means the rank of the coefficient matrix can be at most 3.
- Guarantee of Free Variables: The number of free variables is given by (number of variables) - (rank). Since the rank is at most 3, the number of free variables is at least \(4 - 3 = 1\).
- Conclusion: Because the system is homogeneous and is guaranteed to have at least one free variable, there must be an infinite number of solutions, which means there are nontrivial solutions.
4.4. Метод Гаусса с выбором главного элемента (Лаба 5, Задание 4)
Solve the system of equations using Gaussian elimination process with pivoting by maximum element (absolute value): \[ \begin{cases} 2x_1 + 1x_2 + 3x_3 + 2x_4 = 0 \\ 2x_1 + 1x_2 + 5x_3 + 1x_4 = 2 \\ 2x_1 + 1x_2 + 4x_3 + 2x_4 = 1 \\ 1x_1 + 3x_2 + 3x_3 + 2x_4 = 6 \end{cases} \]
Решение:
- Write the Augmented Matrix:
- \(\begin{bmatrix} 2 & 1 & 3 & 2 & | & 0 \\ 2 & 1 & 5 & 1 & | & 2 \\ 2 & 1 & 4 & 2 & | & 1 \\ 1 & 3 & 3 & 2 & | & 6 \end{bmatrix}\)
- Step 1: Pivoting: The elements in the first column are {2, 2, 2, 1}. The maximum absolute value is 2. No row swap is needed.
- Step 1: Elimination:
- \(R_2 \to R_2 - R_1\), \(R_3 \to R_3 - R_1\), \(R_4 \to R_4 - 0.5R_1\)
- \(\begin{bmatrix} 2 & 1 & 3 & 2 & | & 0 \\ 0 & 0 & 2 & -1 & | & 2 \\ 0 & 0 & 1 & 0 & | & 1 \\ 0 & 2.5 & 1.5 & 1 & | & 6 \end{bmatrix}\)
- Step 2: Pivoting: Look at the sub-matrix from row 2 down. The elements in the second column are {0, 0, 2.5}. The maximum absolute value is 2.5 in row 4. Swap R2 and R4.
- \(\begin{bmatrix} 2 & 1 & 3 & 2 & | & 0 \\ 0 & 2.5 & 1.5 & 1 & | & 6 \\ 0 & 0 & 1 & 0 & | & 1 \\ 0 & 0 & 2 & -1 & | & 2 \end{bmatrix}\)
- Step 2: Elimination: No elimination is needed for the second column as the elements below the pivot are already zero.
- Step 3: Pivoting: Look at the sub-matrix from row 3 down. The elements in the third column are {1, 2}. The maximum absolute value is 2 in row 4. Swap R3 and R4.
- \(\begin{bmatrix} 2 & 1 & 3 & 2 & | & 0 \\ 0 & 2.5 & 1.5 & 1 & | & 6 \\ 0 & 0 & 2 & -1 & | & 2 \\ 0 & 0 & 1 & 0 & | & 1 \end{bmatrix}\)
- Step 3: Elimination:
- \(R_4 \to R_4 - 0.5R_3\)
- \(\begin{bmatrix} 2 & 1 & 3 & 2 & | & 0 \\ 0 & 2.5 & 1.5 & 1 & | & 6 \\ 0 & 0 & 2 & -1 & | & 2 \\ 0 & 0 & 0 & 0.5 & | & 0 \end{bmatrix}\)
- Back Substitution:
- From \(R_4\): \(0.5x_4 = 0 \implies x_4 = 0\).
- From \(R_3\): \(2x_3 - x_4 = 2 \implies 2x_3 - 0 = 2 \implies x_3 = 1\).
- From \(R_2\): \(2.5x_2 + 1.5x_3 + x_4 = 6 \implies 2.5x_2 + 1.5(1) + 0 = 6 \implies 2.5x_2 = 4.5 \implies x_2 = 1.8\).
- From \(R_1\): \(2x_1 + x_2 + 3x_3 + 2x_4 = 0 \implies 2x_1 + 1.8 + 3(1) + 0 = 0 \implies 2x_1 + 4.8 = 0 \implies 2x_1 = -4.8 \implies x_1 = -2.4\).
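As a sanity check, the solution can be substituted back into the original system; a short Python sketch using exact fractions (names are illustrative):

```python
from fractions import Fraction as F

A = [[2, 1, 3, 2], [2, 1, 5, 1], [2, 1, 4, 2], [1, 3, 3, 2]]
b = [0, 2, 1, 6]
x = [F('-2.4'), F('1.8'), F(1), F(0)]  # the solution found above, as exact fractions
# left-hand side of each equation at x should equal b
lhs = [sum(a * xi for a, xi in zip(row, x)) for row in A]
print(lhs == [F(bi) for bi in b])  # -> True
```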
4.5. Анализ линейной системы (Лаба 5, Задание 5)
The system of equations is given: \[ \begin{cases} 1x_1 + 2x_2 + 3x_3 + 5x_4 = b_1 \\ 2x_1 + 4x_2 + 8x_3 + 12x_4 = b_2 \\ 3x_1 + 6x_2 + 7x_3 + 13x_4 = b_3 \end{cases} \]
- Reduce \([A|\mathbf{b}]\) to \([U|\mathbf{c}]\).
- Find the conditions on \((b_1, b_2, b_3)\) to have a solution.
- Describe the column space of A.
- Describe the nullspace of A.
- Find a particular solution to \(A\mathbf{x} = (0, 6, -6)\) and the complete solution \(\mathbf{x}_p + \mathbf{x}_n\).
Решение:
- Reduce to Echelon Form \([U|\mathbf{c}]\):
- \(\begin{bmatrix} 1 & 2 & 3 & 5 & | & b_1 \\ 2 & 4 & 8 & 12 & | & b_2 \\ 3 & 6 & 7 & 13 & | & b_3 \end{bmatrix} \xrightarrow[R_3 \to R_3-3R_1]{R_2 \to R_2-2R_1} \begin{bmatrix} 1 & 2 & 3 & 5 & | & b_1 \\ 0 & 0 & 2 & 2 & | & b_2-2b_1 \\ 0 & 0 & -2 & -2 & | & b_3-3b_1 \end{bmatrix} \xrightarrow{R_3 \to R_3+R_2} \begin{bmatrix} 1 & 2 & 3 & 5 & | & b_1 \\ 0 & 0 & 2 & 2 & | & b_2-2b_1 \\ 0 & 0 & 0 & 0 & | & b_3-3b_1+b_2-2b_1 \end{bmatrix}\)
- \([U|\mathbf{c}] = \begin{bmatrix} 1 & 2 & 3 & 5 & | & b_1 \\ 0 & 0 & 2 & 2 & | & b_2-2b_1 \\ 0 & 0 & 0 & 0 & | & b_2+b_3-5b_1 \end{bmatrix}\)
- Conditions on \(\mathbf{b}\) for a Solution:
- For the system to be consistent, the last row must not be of the form \([0 \ 0 \ 0 \ 0 \ | \ \text{non-zero}]\).
- Therefore, the condition is \(b_2 + b_3 - 5b_1 = 0\).
- Column Space of A:
- The column space is the set of all vectors \(\mathbf{b}\) for which a solution exists.
- This is the plane in \(\mathbb{R}^3\) defined by the equation \(-5b_1 + b_2 + b_3 = 0\).
- Nullspace of A:
- Solve \(A\mathbf{x} = \mathbf{0}\) using the echelon form. The pivots are in columns 1 and 3 (\(x_1, x_3\)). The free variables are \(x_2, x_4\).
- \(2x_3 + 2x_4 = 0 \implies x_3 = -x_4\).
- \(x_1 + 2x_2 + 3x_3 + 5x_4 = 0 \implies x_1 + 2x_2 + 3(-x_4) + 5x_4 = 0 \implies x_1 + 2x_2 + 2x_4 = 0 \implies x_1 = -2x_2 - 2x_4\).
- The nullspace solution is \(\mathbf{x}_n = \begin{bmatrix} -2x_2 - 2x_4 \\ x_2 \\ -x_4 \\ x_4 \end{bmatrix} = x_2\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + x_4\begin{bmatrix} -2 \\ 0 \\ -1 \\ 1 \end{bmatrix}\). The nullspace is the span of these two vectors.
- Particular and Complete Solution:
- We are given \(\mathbf{b} = (0, 6, -6)\). Check the condition: \(6 + (-6) - 5(0) = 0\). It holds.
- Use the reduced system with this \(\mathbf{b}\):
- \(2x_3 + 2x_4 = b_2-2b_1 = 6-0=6 \implies x_3 + x_4 = 3\).
- \(x_1 + 2x_2 + 3x_3 + 5x_4 = b_1 = 0\).
- To find a particular solution \(\mathbf{x}_p\), set free variables to zero: \(x_2=0, x_4=0\).
- \(x_3 + 0 = 3 \implies x_3 = 3\).
- \(x_1 + 0 + 3(3) + 0 = 0 \implies x_1 = -9\).
- So, \(\mathbf{x}_p = \begin{bmatrix} -9 \\ 0 \\ 3 \\ 0 \end{bmatrix}\).
- The complete solution is \(\mathbf{x} = \mathbf{x}_p + \mathbf{x}_n = \begin{bmatrix} -9 \\ 0 \\ 3 \\ 0 \end{bmatrix} + c_1\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + c_2\begin{bmatrix} -2 \\ 0 \\ -1 \\ 1 \end{bmatrix}\).
Ответ:
- See the echelon form above.
- The condition is \(b_2 + b_3 - 5b_1 = 0\).
- The column space is the plane \(-5b_1 + b_2 + b_3 = 0\).
- The nullspace is the span of \(\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}\) and \(\begin{bmatrix} -2 \\ 0 \\ -1 \\ 1 \end{bmatrix}\).
- The complete solution is \(\mathbf{x} = \begin{bmatrix} -9 \\ 0 \\ 3 \\ 0 \end{bmatrix} + c_1\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + c_2\begin{bmatrix} -2 \\ 0 \\ -1 \\ 1 \end{bmatrix}\).
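The particular solution and the nullspace basis can be verified by direct multiplication; a short Python sketch (names are illustrative):

```python
A = [[1, 2, 3, 5], [2, 4, 8, 12], [3, 6, 7, 13]]

def matvec(M, v):
    return [sum(a * x for a, x in zip(row, v)) for row in M]

xp = [-9, 0, 3, 0]   # particular solution
n1 = [-2, 1, 0, 0]   # nullspace basis vectors
n2 = [-2, 0, -1, 1]
print(matvec(A, xp))                 # -> [0, 6, -6]
print(matvec(A, n1), matvec(A, n2))  # -> [0, 0, 0] [0, 0, 0]
```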
4.6. Ранг матрицы (Лекция 5, Пример 1)
Find the rank of the matrix \(A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}\).
Решение:
- Definition of Rank: The rank of a matrix is the maximum number of linearly independent columns (or rows) in the matrix.
- Check for Linear Dependence: We can inspect the relationship between the columns. Let the columns be \(\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3\).
- Notice that \(\mathbf{c}_2 - \mathbf{c}_1 = \begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix} - \begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\).
- Also, \(\mathbf{c}_3 - \mathbf{c}_2 = \begin{bmatrix} 3 \\ 6 \\ 9 \end{bmatrix} - \begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\).
- Since \(\mathbf{c}_2 - \mathbf{c}_1 = \mathbf{c}_3 - \mathbf{c}_2\), we can rearrange this to get \(\mathbf{c}_1 - 2\mathbf{c}_2 + \mathbf{c}_3 = \mathbf{0}\). This shows that the columns are linearly dependent.
- Find a Linearly Independent Subset: The columns are not all linearly independent, so the rank is less than 3.
- Columns \(\mathbf{c}_1 = \begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix}\) and \(\mathbf{c}_2 = \begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix}\) are not scalar multiples of each other, so they are linearly independent.
- Conclusion: The largest set of linearly independent columns has two vectors. Therefore, the rank is 2.
4.7. Ранг и вырожденность (Лекция 5, Пример 2)
Determine the rank of the matrices \(A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}\) and \(B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\) and state whether they are singular or non-singular.
Решение:
- Analyze Matrix A:
- Linear Dependence: The second column, \(\begin{bmatrix} 2 \\ 4 \end{bmatrix}\), is exactly 2 times the first column, \(\begin{bmatrix} 1 \\ 2 \end{bmatrix}\). Since the columns are linearly dependent, the maximum number of linearly independent columns is 1.
- Rank: The rank of A is 1.
- Singularity: A square matrix is singular if its determinant is 0. \(\det(A) = (1)(4) - (2)(2) = 0\). Since the determinant is 0, the matrix is singular. (Note: A square matrix is singular if and only if its rank is less than its dimension).
- Analyze Matrix B:
- Linear Dependence: The second column, \(\begin{bmatrix} 2 \\ 4 \end{bmatrix}\), is not a scalar multiple of the first column, \(\begin{bmatrix} 1 \\ 3 \end{bmatrix}\). The columns are linearly independent.
- Rank: The rank of B is 2.
- Singularity: \(\det(B) = (1)(4) - (2)(3) = 4 - 6 = -2\). Since the determinant is not 0, the matrix is non-singular.
Ответ:
- Matrix A has rank 1 and is singular.
- Matrix B has rank 2 and is non-singular.
4.8. Столбцовое пространство матрицы (Лекция 5, Пример 3)
For the matrix \(A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\), describe its column space.
Решение:
- Identify the Column Vectors: The columns of the matrix A are the vectors that span the column space.
- Column 1: \(\mathbf{c}_1 = \begin{bmatrix} 1 \\ 3 \end{bmatrix}\)
- Column 2: \(\mathbf{c}_2 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}\)
- Define the Column Space: The column space is the set of all possible linear combinations of the column vectors. This is the subspace spanned by the columns.
- Write the General Form: Any vector \(\mathbf{v}\) in the column space of A can be written in the form \(\mathbf{v} = k_1\mathbf{c}_1 + k_2\mathbf{c}_2\) for some scalars \(k_1\) and \(k_2\).
- Column space of A = \(\left\{ k_1\begin{bmatrix} 1 \\ 3 \end{bmatrix} + k_2\begin{bmatrix} 2 \\ 4 \end{bmatrix} \ \middle|\ k_1, k_2 \in \mathbb{R} \right\}\).
- Geometric Interpretation: Since the two column vectors are linearly independent (as shown in the previous problem), they form a basis for \(\mathbb{R}^2\). Therefore, their column space is the entire \(\mathbb{R}^2\) plane.
4.9. Решение методом Гаусса (Лекция 5, Пример 4)
Solve the system using Gaussian elimination: \[ \begin{cases} 3x_1 + 2x_2 = 7 \\ 2x_1 - 4x_2 = -2 \end{cases} \]
Решение:
- Write the Augmented Matrix:
- \(\begin{bmatrix} 3 & 2 & | & 7 \\ 2 & -4 & | & -2 \end{bmatrix}\)
- Perform Row Operations: The goal is to create a zero in the bottom-left position to get an upper triangular matrix (row echelon form).
- Perform the operation \(R_2 \to R_2 - \frac{2}{3}R_1\).
- New \(R_2\): \([2 - \frac{2}{3}(3), -4 - \frac{2}{3}(2) \ | \ -2 - \frac{2}{3}(7)] = [0, -4 - \frac{4}{3} \ | \ -2 - \frac{14}{3}] = [0, -\frac{16}{3} \ | \ -\frac{20}{3}]\)
- The matrix becomes: \(\begin{bmatrix} 3 & 2 & | & 7 \\ 0 & -16/3 & | & -20/3 \end{bmatrix}\)
- Use Back Substitution:
- From the second row: \(-\frac{16}{3}x_2 = -\frac{20}{3} \implies 16x_2 = 20 \implies x_2 = \frac{20}{16} = \frac{5}{4}\).
- From the first row: \(3x_1 + 2x_2 = 7 \implies 3x_1 + 2(\frac{5}{4}) = 7 \implies 3x_1 + \frac{5}{2} = 7 \implies 3x_1 = \frac{14}{2} - \frac{5}{2} = \frac{9}{2} \implies x_1 = \frac{3}{2}\).
4.10. Решение методом Гаусса–Жордана (Лекция 5, Пример 5)
Solve the system using Gauss-Jordan elimination: \[ \begin{cases} 3x_1 + 2x_2 = 7 \\ 2x_1 - 4x_2 = -2 \end{cases} \]
Решение:
- Write the Augmented Matrix:
- \(\begin{bmatrix} 3 & 2 & | & 7 \\ 2 & -4 & | & -2 \end{bmatrix}\)
- Create Row Echelon Form: First, perform Gaussian elimination as in the previous example.
- \(R_2 \to R_2 - \frac{2}{3}R_1\) gives \(\begin{bmatrix} 3 & 2 & | & 7 \\ 0 & -16/3 & | & -20/3 \end{bmatrix}\).
- Normalize the Pivots: Make each pivot (the first non-zero entry in each row) equal to 1.
- \(R_2 \to -\frac{3}{16}R_2\) gives \(\begin{bmatrix} 3 & 2 & | & 7 \\ 0 & 1 & | & 5/4 \end{bmatrix}\).
- \(R_1 \to \frac{1}{3}R_1\) gives \(\begin{bmatrix} 1 & 2/3 & | & 7/3 \\ 0 & 1 & | & 5/4 \end{bmatrix}\).
- Create Zeros Above the Pivots: The goal is to reach reduced row echelon form (an identity matrix on the left).
- Perform the operation \(R_1 \to R_1 - \frac{2}{3}R_2\).
- New \(R_1\): \([1-0, \frac{2}{3}-\frac{2}{3}(1) \ | \ \frac{7}{3} - \frac{2}{3}(\frac{5}{4})] = [1, 0 \ | \ \frac{7}{3} - \frac{10}{12}] = [1, 0 \ | \ \frac{28}{12} - \frac{10}{12}] = [1, 0 \ | \ \frac{18}{12}] = [1, 0 \ | \ \frac{3}{2}]\).
- The matrix becomes: \(\begin{bmatrix} 1 & 0 & | & 3/2 \\ 0 & 1 & | & 5/4 \end{bmatrix}\).
- Read the Solution: The matrix is now in the form \([I|\mathbf{x}]\), so the solution is directly visible.
- \(x_1 = 3/2\)
- \(x_2 = 5/4\)
4.11. Система не имеет решения (Лекция 5, Пример 6)
Consider the \(3 \times 3\) linear system: \[ \begin{cases} x + y + z = 1 \\ 2x + 2y + 2z = 3 \\ 3x + y - z = 2 \end{cases} \] Show this system has no solution using Gaussian elimination.
Решение:
- Write the Augmented Matrix:
- \(\begin{bmatrix} 1 & 1 & 1 & | & 1 \\ 2 & 2 & 2 & | & 3 \\ 3 & 1 & -1 & | & 2 \end{bmatrix}\)
- Perform Row Operations:
- Perform \(R_2 \to R_2 - 2R_1\).
- New \(R_2\): \([2-2(1), 2-2(1), 2-2(1) \ | \ 3-2(1)] = [0, 0, 0 \ | \ 1]\).
- The matrix becomes: \(\begin{bmatrix} 1 & 1 & 1 & | & 1 \\ 0 & 0 & 0 & | & 1 \\ 3 & 1 & -1 & | & 2 \end{bmatrix}\).
- Analyze the Result:
- The second row of the matrix corresponds to the equation \(0x + 0y + 0z = 1\).
- This simplifies to the equation \(0 = 1\), which is a contradiction.
- Conclusion:
- Since the row reduction process leads to a contradictory statement, the original system of equations is inconsistent.
4.12. Решение по правилу Крамера (Лекция 5, Пример 7)
Solve the system using Cramer’s Rule: \[ \begin{cases} 3x_1 + 2x_2 = 7 \\ 2x_1 - 4x_2 = -2 \end{cases} \]
Решение:
- Define the Coefficient Matrix and Vector:
- Coefficient matrix \(A = \begin{bmatrix} 3 & 2 \\ 2 & -4 \end{bmatrix}\).
- Constant vector \(\mathbf{b} = \begin{bmatrix} 7 \\ -2 \end{bmatrix}\).
- Calculate the Determinant of A:
- \(D = \det(A) = (3)(-4) - (2)(2) = -12 - 4 = -16\).
- Since \(D \neq 0\), a unique solution exists.
- Calculate the Determinant for \(x_1\): Replace the first column of A with the vector \(\mathbf{b}\).
- \(A_1 = \begin{bmatrix} 7 & 2 \\ -2 & -4 \end{bmatrix}\).
- \(D_1 = \det(A_1) = (7)(-4) - (2)(-2) = -28 + 4 = -24\).
- Calculate the Determinant for \(x_2\): Replace the second column of A with the vector \(\mathbf{b}\).
- \(A_2 = \begin{bmatrix} 3 & 7 \\ 2 & -2 \end{bmatrix}\).
- \(D_2 = \det(A_2) = (3)(-2) - (7)(2) = -6 - 14 = -20\).
- Find the Solutions:
- \(x_1 = D_1 / D = -24 / -16 = 3/2\).
- \(x_2 = D_2 / D = -20 / -16 = 5/4\).
4.13. Анализ столбцового пространства (Туториал 5, Задание 1)
Find the column space of \(A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}\). Describe it geometrically and find its dimension.
Решение:
- Identify the Column Vectors: The column space is the span of the column vectors of A.
- \(\mathbf{c}_1 = \begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix}\), \(\mathbf{c}_2 = \begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix}\), \(\mathbf{c}_3 = \begin{bmatrix} 3 \\ 6 \\ 9 \end{bmatrix}\)
- Determine Linear Independence: We check if the vectors are linearly independent. Notice that \(\mathbf{c}_1 + \mathbf{c}_3 = 2\mathbf{c}_2\). This can be rewritten as \(\mathbf{c}_1 - 2\mathbf{c}_2 + \mathbf{c}_3 = \mathbf{0}\), which shows the columns are linearly dependent.
- Find a Basis for the Column Space: Since the three vectors are dependent, the dimension of the column space (the rank) is less than 3. Vectors \(\mathbf{c}_1\) and \(\mathbf{c}_2\) are not scalar multiples of each other, so they are linearly independent. They can form a basis for the column space.
- Describe the Column Space: The column space is the set of all linear combinations of the basis vectors.
- \(C(A) = \text{span}\{\mathbf{c}_1, \mathbf{c}_2\} = \left\{ k_1\begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix} + k_2\begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix} \ \middle|\ k_1, k_2 \in \mathbb{R} \right\}\).
- Geometric Description and Dimension: The dimension of the column space is the number of vectors in its basis, which is 2. A two-dimensional subspace of \(\mathbb{R}^3\) is a plane passing through the origin.
4.14. Столбцовое и строчное пространства (Туториал 5, Задание 2)
For \(B = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 3 & 6 & 3 \end{bmatrix}\): Find the column space and its dimension, find the row space and its dimension, and verify they have the same dimension.
Решение:
- Find the Column Space:
- The column vectors are \(\mathbf{c}_1 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}\), \(\mathbf{c}_2 = \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix}\), \(\mathbf{c}_3 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}\).
- We can see that \(\mathbf{c}_2 = 2\mathbf{c}_1\) and \(\mathbf{c}_3 = \mathbf{c}_1\). All columns are multiples of the first column.
- The column space is the span of the first column: \(C(B) = \text{span}\left\{\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}\right\}\). This is a line in \(\mathbb{R}^3\).
- The dimension of the column space is 1.
- Find the Row Space:
- The row vectors are \(\mathbf{r}_1 = \begin{bmatrix} 1 & 2 & 1 \end{bmatrix}\), \(\mathbf{r}_2 = \begin{bmatrix} 2 & 4 & 2 \end{bmatrix}\), \(\mathbf{r}_3 = \begin{bmatrix} 3 & 6 & 3 \end{bmatrix}\).
- We can see that \(\mathbf{r}_2 = 2\mathbf{r}_1\) and \(\mathbf{r}_3 = 3\mathbf{r}_1\). All rows are multiples of the first row.
- The row space is the span of the first row: \(R(B) = \text{span}\{\begin{bmatrix} 1 & 2 & 1 \end{bmatrix}\}\). This is a line in \(\mathbb{R}^3\).
- The dimension of the row space is 1.
- Verify Dimensions: The dimension of the column space is 1, and the dimension of the row space is 1. They are equal, as expected by the Rank Theorem.
4.15. Проверка ортогональности матриц (Туториал 5, Задание 3)
Which of these matrices are orthogonal? Verify your answers. \(Q_1 = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\), \(Q_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\), \(Q_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}\)
Решение:
- Condition for Orthogonality: A matrix \(Q\) is orthogonal if its columns form an orthonormal set (they are mutually orthogonal unit vectors). This is equivalent to the condition \(Q^T Q = I\).
- Check \(Q_1\):
- The columns are \(\mathbf{q}_1 = \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}\) and \(\mathbf{q}_2 = \begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix}\).
- Check norms (lengths): \(||\mathbf{q}_1||^2 = \cos^2\theta + \sin^2\theta = 1\) and \(||\mathbf{q}_2||^2 = (-\sin\theta)^2 + \cos^2\theta = 1\). The columns are unit vectors.
- Check orthogonality (dot product): \(\mathbf{q}_1 \cdot \mathbf{q}_2 = (\cos\theta)(-\sin\theta) + (\sin\theta)(\cos\theta) = 0\). The columns are orthogonal.
- Conclusion: \(Q_1\) is an orthogonal matrix.
- Check \(Q_2\):
- The columns are \(\mathbf{q}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}\) and \(\mathbf{q}_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}\).
- Check norms: \(||\mathbf{q}_1||^2 = 1^2 + 1^2 = 2 \neq 1\). The columns are not unit vectors.
- Conclusion: \(Q_2\) is not orthogonal.
- Check \(Q_3\):
- The columns are \(\mathbf{q}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\), \(\mathbf{q}_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\), \(\mathbf{q}_3 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}\).
- These are the standard basis vectors, which are known to be unit vectors and mutually orthogonal.
- Conclusion: \(Q_3\) is an orthogonal matrix (it is a permutation matrix).
4.16. Свойства ортогональных матриц (Туториал 5, Задание 4)
Let \(Q\) be an \(n \times n\) orthogonal matrix. Prove that:
- \(||Q\mathbf{x}|| = ||\mathbf{x}||\) for any \(\mathbf{x} \in \mathbb{R}^n\)
- \(|\det(Q)| = 1\)
Решение:
Proof of (1): Length Preservation
- Start with the squared norm: \(||Q\mathbf{x}||^2 = (Q\mathbf{x}) \cdot (Q\mathbf{x})\).
- Using the transpose property for dot products: \((Q\mathbf{x}) \cdot (Q\mathbf{x}) = (Q\mathbf{x})^T(Q\mathbf{x})\).
- Apply the transpose rule \((AB)^T = B^T A^T\): \((Q\mathbf{x})^T(Q\mathbf{x}) = (\mathbf{x}^T Q^T)(Q\mathbf{x}) = \mathbf{x}^T(Q^T Q)\mathbf{x}\).
- By definition of an orthogonal matrix, \(Q^T Q = I\).
- Substitute: \(\mathbf{x}^T(I)\mathbf{x} = \mathbf{x}^T\mathbf{x} = ||\mathbf{x}||^2\).
- We have shown \(||Q\mathbf{x}||^2 = ||\mathbf{x}||^2\). Since norms are non-negative, taking the square root gives \(||Q\mathbf{x}|| = ||\mathbf{x}||\).
Proof of (2): Determinant
- Start with the definition of an orthogonal matrix: \(Q^T Q = I\).
- Take the determinant of both sides: \(\det(Q^T Q) = \det(I)\).
- Use determinant properties \(\det(AB) = \det(A)\det(B)\) and \(\det(A^T) = \det(A)\): \(\det(Q^T)\det(Q) = 1\).
- This becomes \(\det(Q)\det(Q) = 1\), or \((\det(Q))^2 = 1\).
- Taking the square root of both sides gives \(\det(Q) = \pm 1\).
- Therefore, the absolute value is \(|\det(Q)| = 1\).
4.17. Проверка обратимости (Туториал 5, Задание 5)
Determine which matrices are invertible and find inverses when possible: \(A = \begin{bmatrix} 2 & 1 \\ 5 & 3 \end{bmatrix}\), \(B = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}\), \(C = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & -1 \\ 2 & 0 & 4 \end{bmatrix}\)
Решение:
- Analyze Matrix A:
- A matrix is invertible if and only if its determinant is non-zero.
- \(\det(A) = (2)(3) - (1)(5) = 6 - 5 = 1\).
- Since \(\det(A) \neq 0\), A is invertible.
- The inverse is \(A^{-1} = \frac{1}{1}\begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix} = \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix}\).
- Analyze Matrix B:
- \(\det(B) = 1(5\cdot9 - 6\cdot8) - 2(4\cdot9 - 6\cdot7) + 3(4\cdot8 - 5\cdot7)\)
- \(= 1(45 - 48) - 2(36 - 42) + 3(32 - 35) = 1(-3) - 2(-6) + 3(-3) = -3 + 12 - 9 = 0\).
- Since \(\det(B) = 0\), B is not invertible.
- Analyze Matrix C:
- Notice that the third row is 2 times the first row. This means the rows are linearly dependent, so the determinant must be 0.
- \(\det(C) = 1(1\cdot4 - (-1)\cdot0) - 0(...) + 2(0\cdot0 - 1\cdot2) = 1(4) - 0 + 2(-2) = 4 - 4 = 0\).
- Since \(\det(C) = 0\), C is not invertible.
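The determinant and inverse computations above can be reproduced in a few lines of plain Python; `det2` and `det3` below are hand-rolled cofactor expansions (our names, not library calls):

```python
def det2(M):
    # ad - bc for a 2x2 matrix [[a, b], [c, d]].
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):
    # Cofactor expansion along the first row of a 3x3 matrix.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

A = [[2, 1], [5, 3]]
B = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
C = [[1, 0, 2], [0, 1, -1], [2, 0, 4]]
print(det2(A), det3(B), det3(C))  # 1 0 0

# 2x2 inverse via the adjugate formula, valid since det2(A) != 0:
d = det2(A)
A_inv = [[ A[1][1] / d, -A[0][1] / d],
         [-A[1][0] / d,  A[0][0] / d]]
print(A_inv)  # [[3.0, -1.0], [-5.0, 2.0]]
```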
4.18. Invertibility Conditions (Tutorial 5, Problem 6)
Prove that for a square matrix \(A\), the following are equivalent:
- \(A\) is invertible
- \(A\mathbf{x} = \mathbf{0}\) has only the trivial solution
- The columns of \(A\) are linearly independent
- \(\det(A) \neq 0\)
Solution:
This is a statement of the Invertible Matrix Theorem. We can prove the equivalence by showing a cycle of implications, for example, \((1) \Rightarrow (2) \Rightarrow (3) \Rightarrow (4) \Rightarrow (1)\).
- \((1) \Rightarrow (2)\): Assume A is invertible. Let \(A\mathbf{x} = \mathbf{0}\). We can multiply both sides by \(A^{-1}\): \(A^{-1}(A\mathbf{x}) = A^{-1}\mathbf{0}\). This simplifies to \((A^{-1}A)\mathbf{x} = \mathbf{0}\), then \(I\mathbf{x} = \mathbf{0}\), which means \(\mathbf{x} = \mathbf{0}\). Thus, the only solution is the trivial one.
- \((2) \Rightarrow (3)\): Assume \(A\mathbf{x} = \mathbf{0}\) has only the trivial solution. The definition of linear independence for the columns of A (\(\mathbf{a}_1, ..., \mathbf{a}_n\)) is that the only solution to the equation \(c_1\mathbf{a}_1 + ... + c_n\mathbf{a}_n = \mathbf{0}\) is \(c_1=...=c_n=0\). This vector equation is identical to \(A\mathbf{c} = \mathbf{0}\), where \(\mathbf{c}\) is the vector of coefficients. Our assumption means that \(\mathbf{c}\) must be the zero vector, which is precisely the definition of linear independence for the columns of A.
- \((3) \Rightarrow (4)\): Assume the columns of A are linearly independent, so the \(n \times n\) matrix \(A\) has rank \(n\). A square matrix has full rank if and only if its determinant is non-zero: a full-rank matrix can be row-reduced to the identity matrix, whose determinant is 1, and each elementary row operation multiplies the determinant by a non-zero factor (swaps by \(-1\), scalings by the scalar, row additions by \(1\)).
- \((4) \Rightarrow (1)\): Assume \(\det(A) \neq 0\). The formula for the inverse of a matrix is \(A^{-1} = \frac{1}{\det(A)}\text{adj}(A)\). Since the determinant is non-zero, this formula is well-defined and an inverse matrix exists.
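The chain of equivalences can be illustrated numerically on a concrete matrix: \(n\) pivots after elimination means full rank, hence the columns are independent and \(A\mathbf{x} = \mathbf{0}\) has only the trivial solution. A minimal sketch in plain Python, using exact `Fraction` arithmetic so pivots are never lost to rounding (the helper name `rank` is ours):

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over exact rationals; rank = number of pivots.
    M = [[Fraction(v) for v in row] for row in M]
    r = 0  # current pivot row
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[2, 1], [5, 3]]   # det(A) = 2*3 - 1*5 = 1 != 0, so condition (4) holds
n = len(A)

# Full rank (n pivots) <=> columns independent <=> Ax = 0 only trivially.
print(rank(A) == n)    # True
```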
4.19. Computing the Rank (Tutorial 5, Problem 7)
Find the rank of these matrices: \(A = \begin{bmatrix} 1 & 2 & 0 \\ 3 & 6 & 1 \\ 2 & 4 & -1 \end{bmatrix}\), \(B = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 1 & 3 & 5 & 7 \end{bmatrix}\), \(C = \begin{bmatrix} 2 & 4 & 6 \\ 1 & 1 & 1 \\ 0 & 1 & 2 \end{bmatrix}\)
Solution:
We use Gaussian elimination to find the row echelon form. The rank is the number of non-zero rows (pivots).
- Matrix A:
- \(\begin{bmatrix} 1 & 2 & 0 \\ 3 & 6 & 1 \\ 2 & 4 & -1 \end{bmatrix} \xrightarrow[R_3 \to R_3 - 2R_1]{R_2 \to R_2 - 3R_1} \begin{bmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -1 \end{bmatrix} \xrightarrow{R_3 \to R_3 + R_2} \begin{bmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}\)
- There are 2 non-zero rows. Rank(A) = 2.
- Matrix B:
- \(\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 1 & 3 & 5 & 7 \end{bmatrix} \xrightarrow[R_3 \to R_3 - R_1]{R_2 \to R_2 - R_1} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 2 & 4 & 6 \end{bmatrix} \xrightarrow{R_3 \to R_3 - 2R_2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}\)
- There are 2 non-zero rows. Rank(B) = 2.
- Matrix C:
- \(\begin{bmatrix} 2 & 4 & 6 \\ 1 & 1 & 1 \\ 0 & 1 & 2 \end{bmatrix} \xrightarrow{R_1 \leftrightarrow R_2} \begin{bmatrix} 1 & 1 & 1 \\ 2 & 4 & 6 \\ 0 & 1 & 2 \end{bmatrix} \xrightarrow{R_2 \to R_2 - 2R_1} \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 4 \\ 0 & 1 & 2 \end{bmatrix} \xrightarrow{R_3 \to R_3 - \frac{1}{2}R_2} \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 4 \\ 0 & 0 & 0 \end{bmatrix}\)
- There are 2 non-zero rows. Rank(C) = 2.
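The same elimination can be automated. A self-contained sketch in plain Python, with exact `Fraction` arithmetic to keep pivot tests reliable (the helper name `rank` is ours):

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over exact rationals; rank = number of pivots.
    M = [[Fraction(v) for v in row] for row in M]
    r = 0  # current pivot row
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 0], [3, 6, 1], [2, 4, -1]]
B = [[1, 1, 1, 1], [1, 2, 3, 4], [1, 3, 5, 7]]
C = [[2, 4, 6], [1, 1, 1], [0, 1, 2]]
print(rank(A), rank(B), rank(C))  # 2 2 2
```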
4.20. Rank Inequalities (Tutorial 5, Problem 8)
Let \(A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\), \(B = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\). Verify the rank inequalities:
- \(\text{rank}(A+B) \le \text{rank}(A) + \text{rank}(B)\)
- \(\text{rank}(AB) \le \min(\text{rank}(A), \text{rank}(B))\)
- \(\text{rank}(A) + \text{rank}(B) - n \le \text{rank}(AB)\) (Sylvester)
Solution:
First, we find the ranks of the given matrices:
- \(\text{rank}(A) = 1\) (one non-zero row/column)
- \(\text{rank}(B) = 1\) (one non-zero row/column)
- The matrix size is \(n=2\).
- Verify inequality (1):
- \(A + B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\).
- \(\text{rank}(A+B) = 2\).
- The inequality is \(2 \le 1 + 1\), which is \(2 \le 2\). This is true.
- Verify inequality (2):
- \(AB = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}\).
- \(\text{rank}(AB) = 0\).
- \(\min(\text{rank}(A), \text{rank}(B)) = \min(1, 1) = 1\).
- The inequality is \(0 \le 1\). This is true.
- Verify inequality (3) (Sylvester’s Inequality):
- \(\text{rank}(A) + \text{rank}(B) - n = 1 + 1 - 2 = 0\).
- \(\text{rank}(AB) = 0\).
- The inequality is \(0 \le 0\). This is true.
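All three inequalities can be checked mechanically. A self-contained sketch in plain Python with an elimination-based rank helper (the names `rank` and `matmul` are ours):

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over exact rationals; rank = number of pivots.
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(X, Y):
    # Standard matrix product XY.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 0], [0, 0]]
B = [[0, 0], [0, 1]]
n = 2
S = [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]  # A + B
P = matmul(A, B)                                               # AB

print(rank(S), rank(P))                    # 2 0
assert rank(S) <= rank(A) + rank(B)        # subadditivity: 2 <= 2
assert rank(P) <= min(rank(A), rank(B))    # product bound:  0 <= 1
assert rank(A) + rank(B) - n <= rank(P)    # Sylvester:      0 <= 0
```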
4.21. Rank, Determinant, and Invertibility (Tutorial 5, Problem 10)
For each matrix, compute the rank and determinant, and determine invertibility: \(D = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{bmatrix}\), \(E = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 1 & 1 \end{bmatrix}\), \(F = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}\)
Solution:
- Matrix D:
- Determinant: \(\det(D) = 2(2\cdot2 - 1\cdot1) - 1(1\cdot2 - 1\cdot1) + 1(1\cdot1 - 2\cdot1) = 2(3) - 1(1) + 1(-1) = 6 - 1 - 1 = 4\).
- Rank: Since the determinant of the \(3 \times 3\) matrix is non-zero, it has full rank. Rank(D) = 3.
- Invertibility: Since \(\det(D) \neq 0\), the matrix is invertible.
- Matrix E:
- Determinant: The second row is twice the first row, so the rows are linearly dependent. Therefore, \(\det(E) = 0\).
- Rank: Since the determinant is 0, the rank must be less than 3. The first and third rows, \(\begin{bmatrix} 1 & 2 & 3 \end{bmatrix}\) and \(\begin{bmatrix} 1 & 1 & 1 \end{bmatrix}\), are not multiples of each other, so they are linearly independent. Thus, the rank is at least 2. Rank(E) = 2.
- Invertibility: Since \(\det(E) = 0\), the matrix is not invertible.
- Matrix F:
- Determinant: F is a diagonal matrix, so its determinant is the product of the diagonal elements: \(\det(F) = 1 \cdot 2 \cdot 3 = 6\).
- Rank: Since the determinant is non-zero, the matrix has full rank. Rank(F) = 3.
- Invertibility: Since \(\det(F) \neq 0\), the matrix is invertible.
Answer:
- D: Rank = 3, Determinant = 4, Invertible.
- E: Rank = 2, Determinant = 0, Not Invertible.
- F: Rank = 3, Determinant = 6, Invertible.
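These answers can be cross-checked in plain Python with a cofactor-expansion determinant and an elimination-based rank (the helper names `det3` and `rank` are ours):

```python
from fractions import Fraction

def det3(M):
    # Cofactor expansion along the first row of a 3x3 matrix.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def rank(M):
    # Gaussian elimination over exact rationals; rank = number of pivots.
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

D = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
E = [[1, 2, 3], [2, 4, 6], [1, 1, 1]]
F = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]

for name, M in (("D", D), ("E", E), ("F", F)):
    d = det3(M)
    print(name, "rank =", rank(M), "det =", d, "invertible =", d != 0)
```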